Second opinion


'We didn't vote for ChatGPT': Swedish PM under fire for using AI in role

The Guardian

The Swedish prime minister, Ulf Kristersson, has come under fire after admitting that he regularly consults AI tools for a second opinion in his role running the country. Kristersson, whose Moderate party leads Sweden's centre-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. And should we think the complete opposite?" Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis". "You have to be very careful," Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information. Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there."


A Layered Multi-Expert Framework for Long-Context Mental Health Assessments

Tang, Jinwen, Guo, Qiming, Sun, Wenbo, Shang, Yi

arXiv.org Artificial Intelligence

Long-form mental health assessments pose unique challenges for large language models (LLMs), which often exhibit hallucinations or inconsistent reasoning when handling extended, domain-specific contexts. We introduce Stacked Multi-Model Reasoning (SMMR), a layered framework that leverages multiple LLMs and specialized smaller models as coequal 'experts'. Early layers isolate short, discrete subtasks, while later layers integrate and refine these partial outputs through more advanced long-context models. We evaluate SMMR on the DAIC-WOZ depression-screening dataset and 48 curated case studies with psychiatric diagnoses, demonstrating consistent improvements over single-model baselines in terms of accuracy, F1-score, and PHQ-8 error reduction. By harnessing diverse 'second opinions', SMMR mitigates hallucinations, captures subtle clinical nuances, and enhances reliability in high-stakes mental health assessments. Our findings underscore the value of multi-expert frameworks for more trustworthy AI-driven screening.
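The abstract describes early layers that handle short, discrete subtasks and later layers that integrate the partial outputs. As a rough illustration only (not the authors' implementation: the "experts" here are trivial keyword stand-ins for the LLMs and specialized smaller models the paper uses, and the aggregation rule is a simple average), a stacked pipeline might look like:

```python
# Minimal sketch of a stacked multi-model screening pipeline in the spirit
# of SMMR. All expert functions below are illustrative placeholders.

def segment_transcript(transcript, window=3):
    """Layer 0: split a long assessment into short, discrete chunks."""
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [". ".join(sentences[i:i + window])
            for i in range(0, len(sentences), window)]

def expert_a(chunk):
    """Stand-in expert: flags a chunk that mentions low mood."""
    return int(any(w in chunk.lower() for w in ("sad", "hopeless", "down")))

def expert_b(chunk):
    """Stand-in expert: flags a chunk that mentions sleep/appetite change."""
    return int(any(w in chunk.lower() for w in ("sleep", "appetite", "tired")))

def aggregate(per_chunk_votes):
    """Later layer: integrate partial outputs into one screening signal.
    Here: average the experts' votes per chunk, then average over chunks."""
    chunk_scores = [sum(votes) / len(votes) for votes in per_chunk_votes]
    return sum(chunk_scores) / len(chunk_scores)

def smmr_screen(transcript, experts=(expert_a, expert_b)):
    chunks = segment_transcript(transcript)
    votes = [[e(c) for e in chunks and [c] for e in ()] for c in ()]  # placeholder removed below
    votes = [[e(c) for e in experts] for c in chunks]
    return aggregate(votes)

score = smmr_screen(
    "I feel sad most days. I cannot sleep well. Work is fine. "
    "I enjoy walking. Friends visit often. The weather is nice."
)
print(round(score, 2))
```

The point of the structure, mirroring the abstract, is that no single model sees the whole long context at once in the early layers; disagreement among the "second opinions" is resolved only at the aggregation stage.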


Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making

Lu, Zhuoran, Wang, Dakuo, Yin, Ming

arXiv.org Artificial Intelligence

AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on AI, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers have control over when to solicit a peer's second opinion, we find that their active solicitations of second opinions have the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaborations in decision-making.


AI may be on its way to your doctor's office, but it's not ready to see patients

Los Angeles Times

What use could healthcare have for someone who makes things up, can't keep a secret, doesn't really know anything, and, when speaking, simply fills in the next word based on what's come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there. Companies pushing the latest AI technology -- known as "generative AI" -- are piling on: Google and Microsoft want to bring types of so-called large language models to healthcare. Big firms that are familiar to folks in white coats -- but maybe less so to your average Joe and Jane -- are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren't far behind. The space is crowded with startups, too.


VIDEO: Overview of radiology AI by Keith Dreyer

#artificialintelligence

Keith J. Dreyer, DO, PhD, FACR, American College of Radiology (ACR) Data Science Institute Chief Science Officer, explains the state of artificial intelligence (AI) in radiology in 2022. Although about 200 AI algorithms for medical imaging are now cleared by the U.S. Food and Drug Administration (FDA), a recent ACR survey of its members showed AI has only about a 2% market penetration rate. "So, there is about another 98% that fall into the category of potential addressable market," Dreyer said. "Now why is that, when there is a lot of enthusiasm? We are past the days from six years ago when radiologists were fearful of losing their jobs to AI, because Geoffrey Hinton said we should stop training radiologists since AI would take over within another five years. That was in 2016, and we are now past the five-year mark, and it's ridiculous, because today there is an incredible shortage of radiologists."


Counterfactual Inference of Second Opinions

Benz, Nina L. Corvelo, Rodriguez, Manuel Gomez

arXiv.org Machine Learning

Automated decision support systems that are able to infer second opinions from experts can potentially facilitate a more efficient allocation of resources; they can help decide when and from whom to seek a second opinion. In this paper, we look at the design of this type of support system from the perspective of counterfactual inference. We focus on a multiclass classification setting and first show that, if experts make predictions on their own, the underlying causal mechanism generating their predictions needs to satisfy a desirable property, set invariance. Further, we show that, for any causal mechanism satisfying this property, there exists an equivalent mechanism where the predictions by each expert are generated by independent sub-mechanisms governed by a common noise. This motivates the design of a set-invariant Gumbel-Max structural causal model where the structure of the noise governing the sub-mechanisms underpinning the model depends on an intuitive notion of similarity between experts which can be estimated from data. Experiments on both synthetic and real data show that our model can be used to infer second opinions more accurately than its non-causal counterpart.
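The Gumbel-Max mechanics the abstract relies on can be sketched in a few lines. This is a toy illustration, not the paper's full set-invariant model: it assumes the two experts share the same Gumbel noise entirely (the paper lets the degree of sharing depend on expert similarity), and the logits are invented for the example. Given one expert's observed prediction, rejection-sampling noise consistent with that observation yields a counterfactual distribution over a second expert's opinion:

```python
import math
import random

def sample_gumbel(rng, k):
    """Draw k independent standard Gumbel variates via inverse transform."""
    return [-math.log(-math.log(rng.random())) for _ in range(k)]

def gumbel_max(logits, noise):
    """Gumbel-Max mechanism: the prediction is argmax(logits + noise)."""
    scores = [l + g for l, g in zip(logits, noise)]
    return scores.index(max(scores))

def counterfactual_second_opinion(logits_obs, y_obs, logits_other,
                                  n_samples=2000, seed=0):
    """Estimate P(second expert's opinion | first expert predicted y_obs)
    by rejection-sampling shared Gumbel noise consistent with y_obs."""
    rng = random.Random(seed)
    counts = [0] * len(logits_other)
    accepted = 0
    while accepted < n_samples:
        noise = sample_gumbel(rng, len(logits_obs))
        if gumbel_max(logits_obs, noise) == y_obs:  # keep noise that explains
            counts[gumbel_max(logits_other, noise)] += 1  # the observation
            accepted += 1
    return [c / n_samples for c in counts]

# Two "similar" experts over 4 classes; logits are illustrative only.
dist = counterfactual_second_opinion(
    logits_obs=[0.2, 1.5, 0.1, 0.3], y_obs=1,
    logits_other=[0.3, 1.4, 0.2, 0.1],
)
print([round(p, 2) for p in dist])
```

Because the noise is shared, observing the first expert's prediction concentrates the inferred distribution on the second expert agreeing, which is exactly what makes counterfactual inference of second opinions more informative than treating the experts as independent.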


AI Takes Bite Out of Dental Slide Misses by Assisting Doctors

#artificialintelligence

Your next trip to the dentist might offer a taste of AI. Pearl, a West Hollywood startup, provides AI for dental images to assist in diagnosis. It landed FDA clearance last month, the first to get such a go-ahead for dentistry AI. The approval paves the way for its use in clinics across the United States. "It's really a first of its kind for dentistry," said Ophir Tanz, co-founder and CEO of Pearl.


First FDA Approved AI Software Can Now Read Dental Xrays

#artificialintelligence

The Food and Drug Administration has approved the first artificial intelligence software to be used to interpret dental x-rays, allowing dentists to better screen for oral pathologies. Pearl's Second Opinion is the first and only FDA-cleared AI radiologic detection aid for dentists at the chairside, and it can assist dentists to discover a variety of common dental diseases such as tooth decay, calculus, and root abscesses. Pearl gathered over 100 million dental x-rays from dental practices and academic institutes to create Second Opinion. The AI platform highlights anomalies in x-rays and also acts as a patient communication tool, allowing dentists to exhibit alternative models of a patient's teeth and highlight trouble regions. Pearl's announcement is a significant step forward in the field of technology-assisted dentistry.


La veille de la cybersécurité

#artificialintelligence

But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes? A pair of Canadian mental-health researchers believe it can. In a study published in the Journal of Applied Behavior Analysis, Marc Lanovaz of Université de Montréal and Kieva Hranchuk of St. Lawrence College, in Ontario, make a case for using AI in treating behavioral problems. To find a better way, Lanovaz and Hranchuk, a professor of behavioral science and behavioral psychology at St. Lawrence, compiled simulated data from 1,024 individuals receiving treatment for behavioral issues.


Study: AI can make better clinical decisions than humans

#artificialintelligence

But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes? A pair of Canadian mental-health researchers believe it can. In a study published in the Journal of Applied Behavior Analysis, Marc Lanovaz of Université de Montréal and Kieva Hranchuk of St. Lawrence College, in Ontario, make a case for using AI in treating behavioral problems. "Medical and educational professionals frequently disagree on the effectiveness of behavioral interventions, which may cause people to receive inadequate treatment," said Lanovaz, an associate professor who heads the Applied Behavioral Research Lab at UdeM's School of Psychoeducation.